14 research outputs found

    A WoT-based method for creating digital sentinel twins of IoT devices

    The data produced by the sensors of IoT devices are becoming keystones for organizations conducting critical decision-making processes. However, delivering this information in real time presents two challenges for organizations: the first is achieving a constant dataflow from the IoT to the cloud, and the second is enabling decision-making processes to retrieve data from these dataflows in real time. This paper presents a cloud-based Web of Things (WoT) method for creating digital twins of IoT devices, named sentinels. The novelty of the proposed approach is that sentinels create an abstract window for decision-making processes to (a) find data (e.g., properties, events, and data from the sensors of IoT devices) or (b) invoke functions (e.g., actions and tasks) from physical devices (PDs) as well as from virtual devices (VDs). In this approach, the applications and services of decision-making processes deal with sentinels instead of managing the complex details associated with the PDs, VDs, and cloud computing infrastructure. A prototype based on the proposed method was implemented to conduct a case study based on a blockchain system for verifying contract violations in sensors used in product transportation logistics. The evaluation showed the effectiveness of sentinels in enabling organizations to obtain data from IoT sensors and the dataflows used by decision-making processes to convert these data into useful information.
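    To make the sentinel abstraction concrete, the sketch below models a sentinel as a plain Python object that decision-making services query and invoke without ever touching the underlying device. It is a minimal illustration only; the class and method names (Sentinel, ingest, find, invoke) are assumptions, not the paper's API.

```python
# Minimal sketch of the "sentinel" abstraction: a digital twin exposing a
# uniform find/invoke window over physical (PD) or virtual (VD) devices.
# All names here are illustrative, not the paper's actual interface.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class Sentinel:
    """Digital twin: decision-making apps talk to this, never to the device."""
    device_id: str
    properties: Dict[str, Any] = field(default_factory=dict)  # latest readings
    actions: Dict[str, Callable[..., Any]] = field(default_factory=dict)

    def ingest(self, name: str, value: Any) -> None:
        # Called by the IoT-to-cloud dataflow each time a reading arrives.
        self.properties[name] = value

    def find(self, name: str) -> Any:
        # (a) find data: properties, events, sensor readings.
        return self.properties[name]

    def invoke(self, action: str, **kwargs: Any) -> Any:
        # (b) invoke functions: actions/tasks on the underlying PD/VD.
        return self.actions[action](**kwargs)


# Usage: a transport-logistics temperature sensor mirrored as a sentinel.
truck = Sentinel("truck-042",
                 actions={"set_alarm": lambda threshold: f"alarm@{threshold}C"})
truck.ingest("temperature", 7.4)          # pushed by the dataflow
assert truck.find("temperature") == 7.4   # read by a decision-making service
truck.invoke("set_alarm", threshold=8.0)
```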

    On the efficient delivery and storage of IoT data in edge-fog-cloud environments

    This article belongs to the Special Issue "Internet of Things, Sensing and Cloud Computing". Cloud storage has become a keystone for organizations to manage the large volumes of data produced by sensors at the edge, as well as information produced by deep and machine learning applications. Nevertheless, the latency produced by geographically distributed systems deployed on the edge, the fog, or the cloud leads to delays observed by end users in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge-fog-cloud environments. In our proposal, entities called data containers are logically coupled with nano/microservices deployed on the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system with storage levels such as in-memory, file system, and cloud services for transparently managing the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge, or machine learning applications processing data at the edge). Data containers are interconnected through a secure and efficient content delivery network, which transparently and automatically performs the continuous delivery of data through the edge-fog-cloud. A prototype of our proposed scheme was implemented and evaluated in a case study based on the management of electrocardiogram sensor data. The obtained results reveal the suitability and efficiency of the proposed scheme. This research was funded by PRONACES-CONACYT through project 41756, "Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud".
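    The hierarchical cache described above (in-memory, then file system, then cloud) can be sketched in a few lines: reads fall through the levels and hits are promoted back to the faster ones. This is an assumed, simplified reading of the design, with a stub standing in for a real cloud object store.

```python
# Illustrative sketch (not the paper's code) of a data container's hierarchical
# cache: reads fall through in-memory -> local file system -> cloud, and data
# found at a slow level is promoted to the faster ones.
import os
import tempfile
from typing import Optional


class CloudStore:  # hypothetical stand-in for a real object-storage client
    def __init__(self):
        self._blobs = {}
    def get(self, key): return self._blobs.get(key)
    def put(self, key, data): self._blobs[key] = data


class DataContainer:
    def __init__(self, cloud: CloudStore, cache_dir: Optional[str] = None):
        self._mem: dict = {}                          # level 1: in-memory
        self._dir = cache_dir or tempfile.mkdtemp()   # level 2: file system
        self._cloud = cloud                           # level 3: cloud service

    def _path(self, key: str) -> str:
        return os.path.join(self._dir, key)

    def write(self, key: str, data: bytes) -> None:
        self._mem[key] = data
        with open(self._path(key), "wb") as f:
            f.write(data)
        self._cloud.put(key, data)                    # async in a real system

    def read(self, key: str) -> Optional[bytes]:
        if key in self._mem:                          # fastest level first
            return self._mem[key]
        if os.path.exists(self._path(key)):
            data = open(self._path(key), "rb").read()
            self._mem[key] = data                     # promote to memory
            return data
        data = self._cloud.get(key)                   # slowest level last
        if data is not None:
            self.write(key, data)                     # repopulate the cache
        return data


store = DataContainer(CloudStore())
store.write("ecg-0001", b"\x01\x02\x03")              # e.g. an ECG sample block
assert store.read("ecg-0001") == b"\x01\x02\x03"
```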

    Kulla, a container-centric construction model for building infrastructure-agnostic distributed and parallel applications

    This paper presents the design, development, and implementation of Kulla, a virtual container-centric construction model that mixes loosely coupled structures with a parallel programming model for building infrastructure-agnostic distributed and parallel applications. In Kulla, applications, dependencies, and environment settings are mapped to construction units called Kulla-Blocks. A parallel programming model enables developers to couple these interoperable structures into constructive structures named Kulla-Bricks. In these structures, continuous dataflow and parallel patterns can be created without modifying the code of applications. Methods such as Divide&Containerize (data parallelism), Pipe&Blocks (streaming), and Manager/Block (task parallelism) were developed to create Kulla-Bricks. Recursive combinations of Kulla instances can be grouped into deployment structures called Kulla-Boxes, which are encapsulated into virtual containers (VCs) to create infrastructure-agnostic parallel and/or distributed applications. Deployment strategies were created for Kulla-Boxes to improve the profitability of IT resources. To show the feasibility and flexibility of this model, solutions combining real-world applications were implemented by using Kulla instances to compose parallel and/or distributed systems deployed on different IT infrastructures. An experimental evaluation based on use cases solving satellite and medical image processing problems revealed the efficiency of the Kulla model in comparison with traditional state-of-the-art solutions. This work has been partially supported by the EU project "ASPIDE: Exascale Programing Models for Extreme Data Processing" under grant 801091 and the project "CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones" (S2018/TCS-4423) from the Madrid Regional Government.
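    As a rough illustration of the block/brick idea, the toy sketch below wraps unmodified applications as Kulla-Blocks and chains them into a streaming Pipe&Blocks brick. The term names come from the paper; the implementation is entirely hypothetical.

```python
# Toy sketch of Kulla's construction units: a Kulla-Block wraps an application
# without touching its code, and a Pipe&Blocks Kulla-Brick chains blocks into
# a continuous dataflow. Implementation is illustrative only.
from typing import Any, Callable, Iterable, Iterator, List


class KullaBlock:
    """Maps one application plus its settings to a reusable unit."""
    def __init__(self, name: str, app: Callable[[Any], Any]):
        self.name, self.app = name, app

    def run(self, item: Any) -> Any:
        return self.app(item)


class PipeAndBlocks:
    """Streaming Kulla-Brick: the output of each block feeds the next."""
    def __init__(self, blocks: List[KullaBlock]):
        self.blocks = blocks

    def stream(self, items: Iterable[Any]) -> Iterator[Any]:
        for item in items:
            for block in self.blocks:
                item = block.run(item)
            yield item


# Usage: a two-stage pipeline (lambdas stand in for containerized apps).
denoise = KullaBlock("denoise", lambda img: img.strip())
segment = KullaBlock("segment", lambda img: img.upper())
brick = PipeAndBlocks([denoise, segment])
print(list(brick.stream([" scan-a ", " scan-b "])))   # ['SCAN-A', 'SCAN-B']
```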

    Internet of Things orchestration using DagOn∗ workflow engine

    The increasing number of connected tiny, low-power, embedded devices, grouped under the generic definition of the "Internet of Things" (IoT), has remarkably raised the amount of in-situ collected data. However, at the time of writing, IoT devices have limited storage and computation resources compared with cloud computing or on-premises infrastructure. IoT devices also often suffer from reduced connectivity due to the place of deployment or other technical, environmental, or economic reasons. In this work, we present the DagOn∗ workflow engine as part of an IoT orchestration scenario oriented to operational environmental prediction. Our novel approach is devoted to joining two worlds: workflows, in which each task runs on a dynamically allocated computational infrastructure, and tiny jobs targeted at embedded devices hosting sensors and actuators. We show our preliminary results applied to a demonstration use case. We are confident that further development of the proposed technology will positively affect production applications for massive and geographically distributed data collection.
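    The sketch below illustrates the shape of such a mixed workflow: a tiny sensing job, a heavy prediction task, and a tiny actuation job ordered by a dependency graph. It uses a hand-rolled executor over the standard library's graphlib, not the DagOn∗ API, and all task names are invented for the example.

```python
# Hedged sketch of a mixed IoT workflow: tiny jobs (sensor read, actuation)
# bracket a heavy task that would run on dynamically allocated compute.
# Minimal hand-rolled DAG execution for illustration; not DagOn* itself.
from graphlib import TopologicalSorter  # Python 3.9+


def read_sensor():      # tiny job: would run on the embedded device
    return {"temp_c": 21.5}

def forecast(obs):      # heavy task: would run on cloud/HPC resources
    return {"tomorrow_c": obs["temp_c"] + 0.8}

def actuate(pred):      # tiny job: e.g. drive an actuator with the result
    print("actuating with", pred)


# Dependencies: forecast needs the observation, actuation needs the forecast.
dag = {"read": set(), "forecast": {"read"}, "actuate": {"forecast"}}
results = {}
for task in TopologicalSorter(dag).static_order():
    if task == "read":
        results[task] = read_sensor()
    elif task == "forecast":
        results[task] = forecast(results["read"])
    else:
        actuate(results["forecast"])
```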

    A microservice-based building block approach for scientific workflow engines: Processing large data volumes with DagOnStar

    The impact of machine learning algorithms on everyday life is overwhelming, to the point of giving rise to the novel concept of datacracy as a new social paradigm. In computational environmental science, and in particular in large-scale data science proofs of concept for natural resource management, these approaches could make the difference between species surviving potential extinction and compromised ecological niches. In this scenario, the use of high-throughput workflow engines enabling the management of complex dataflows in production is rock solid, as demonstrated by the rise of recent tools such as Parsl and DagOnStar. Nevertheless, the availability of dedicated computational resources, although mitigated by the use of cloud computing technologies, can be a remarkable limitation. In this paper, we present a novel and improved version of DagOnStar that enables the execution of lightweight but recurring computational tasks on a microservice architecture. We present our preliminary results, motivating our choices with some evaluations and a real-world use case.
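    One plausible reading of "lightweight but recurring tasks on a microservice architecture" is keeping the task resident as a small service so each execution is a cheap call rather than a fresh container spawn. The sketch below shows that pattern with the standard library only; it is an assumed design, not DagOnStar's implementation, and the handler and port are invented.

```python
# Sketch: a lightweight, recurring workflow task exposed as a long-lived
# microservice. Each POST request is one task execution, avoiding per-run
# startup cost. Assumed design for illustration, not DagOnStar code.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def recurring_task(payload: dict) -> dict:
    # Stand-in for a small, frequently repeated step of a larger workflow.
    return {"mean": sum(payload["values"]) / len(payload["values"])}


class TaskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = recurring_task(json.loads(body))
        out = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)


if __name__ == "__main__":
    # The workflow engine would now POST to http://host:8000/ per iteration.
    HTTPServer(("", 8000), TaskHandler).serve_forever()
```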

    A novel transversal processing model to build environmental big data services in the cloud

    This paper presents a novel transversal, infrastructure-agnostic, and generic processing model to build environmental big data services in the cloud. Transversality is used to build processing structures (PSs) by reusing and coupling multiple existing software packages for processing environmental monitoring, climate, and earth observation data, even at execution time, with datasets available in cloud-based repositories. Infrastructure-agnosticism is used to deploy and execute PSs on the edge, the fog, and/or the cloud. Genericity is used to embed analytic, information-merging, machine learning, and statistical microservices into PSs, automatically and transparently converting PSs into big data services that support decision-making procedures. A prototype was developed to conduct case studies based on climate data classification, earth observation products, and air pollution prediction by merging different climate monitoring data sources. The experimental evaluation revealed the efficacy and flexibility of this model for creating complex environmental big data services.
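    A minimal way to picture a transversal PS is a chain of existing, unmodified programs coupled through their standard input/output, with a generic analytic microservice embedded at the end. The sketch below assumes POSIX tools (sort, head) as stand-ins for real climate-processing software; everything about it is illustrative, not the paper's implementation.

```python
# Illustrative sketch of a processing structure (PS): existing software is
# reused by coupling stdin/stdout, then a generic analytic micro-service is
# embedded. Assumes POSIX `sort` and `head` as placeholder tools.
import shlex
import subprocess
from typing import List


def build_ps(stages: List[str], data: bytes) -> bytes:
    """Couple existing software into one PS; each stage is a shell command."""
    for cmd in stages:
        proc = subprocess.run(shlex.split(cmd), input=data,
                              capture_output=True, check=True)
        data = proc.stdout
    return data


def analytic_service(data: bytes) -> dict:
    # Generic embedded micro-service: summary statistics over numeric lines.
    values = [float(x) for x in data.decode().split()]
    return {"n": len(values), "max": max(values), "min": min(values)}


# Usage: pick the two largest readings, then summarize them.
raw = b"3.1\n9.4\n2.2\n7.7\n"
top = build_ps(["sort -rn", "head -n 2"], raw)
print(analytic_service(top))   # {'n': 2, 'max': 9.4, 'min': 7.7}
```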

    PuzzleMesh: A puzzle model to build mesh of agnostic services for edge-fog-cloud

    This paper presents the design, development, and evaluation of PuzzleMesh, an agnostic service mesh composition model for processing large volumes of data in edge-fog-cloud environments. This model is based on a puzzle metaphor in which pieces, puzzles, and metapuzzles represent self-contained, autonomous, and reusable software artifacts encapsulated into containers and published as microservices. A piece represents the integration of applications with I/O interfaces (loops/sockets), parallel processing, and management software. A puzzle represents a processing structure (e.g., a workflow) built by coupling pieces through loops and sockets. Puzzles integrate structures with a microservice architecture, implicit continuous dataflows, and transparent data exchange management software. A metapuzzle represents a recursive assembly of puzzles. A mesh represents a pool of pieces, puzzles, and metapuzzles from which designers choose artifacts to build services. A prototype developed using the PuzzleMesh model was evaluated through case studies on the automatic construction of processing services for acquiring, pre-processing, manufacturing, preserving, and visualizing satellite imagery. A qualitative comparison revealed that PuzzleMesh provides a flexible way to build reusable and portable services and to improve their usability. The case study also revealed that PuzzleMesh yielded better performance than other state-of-the-art tools.
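    The recursion in the metaphor is the interesting part: because a puzzle exposes the same socket-in/loop-out interface as a piece, puzzles nest into metapuzzles. The toy sketch below shows that; the terms are the paper's, but the code is a hypothetical illustration.

```python
# Toy sketch of the puzzle metaphor: pieces expose sockets (inputs) and loops
# (outputs); a puzzle couples parts loop-to-socket and, sharing the same
# interface, nests into metapuzzles. Hypothetical code, not PuzzleMesh's.
from typing import Any, Callable, Sequence


class Piece:
    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name, self.fn = name, fn

    def process(self, item: Any) -> Any:      # socket in, loop out
        return self.fn(item)


class Puzzle:
    """Couples artifacts loop-to-socket; itself usable as a piece."""
    def __init__(self, name: str, parts: Sequence["Piece"]):
        self.name, self.parts = name, parts

    def process(self, item: Any) -> Any:
        for part in self.parts:
            item = part.process(item)
        return item


ingest = Piece("acquire", lambda tile: {"tile": tile})
correct = Piece("pre-process", lambda d: {**d, "corrected": True})
preproc = Puzzle("acquisition", [ingest, correct])            # a puzzle
publish = Piece("visualize", lambda d: f"published {d['tile']}")
mission = Puzzle("satellite-pipeline", [preproc, publish])    # a metapuzzle
print(mission.process("scene-001"))                           # published scene-001
```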

    FedIDS: a federated cloud storage architecture and satellite image delivery service for building dependable geospatial platforms

    Earth observation satellites produce large amounts of images/data that not only must be processed and preserved in reliable geospatial platforms but also efficiently disseminated among partners/researchers for creating derivative products through collaborative workflows. Organizations can face this challenge in a cost-effective manner by using cloud services. However, outages and violations of integrity/confidentiality associated with this technology can arise. This article presents FedIDS, a suite of cloud-based components for building dependable geospatial platforms. The Fed component enables organizations to build a shared geospatial data infrastructure through the federation of independent cloud resources to withstand outages, whereas the IDS component prevents violations of the integrity/confidentiality of images/data in information sharing and collaboration workflows. A FedIDS prototype deployed in Spain and Mexico was evaluated through one case study based on satellite imagery captured by a Mexican antenna and another based on satellite imagery from a European observation mission. The acquisition, storage, and sharing of images among users of the federation, the exchange of images between the Mexican and Spanish sites, and outage scenarios were evaluated. The evaluation revealed the feasibility, reliability, and efficiency of FedIDS, in comparison with available solutions, in terms of performance, storage consumption, and integrity/confidentiality when sharing images/data in collaborative scenarios.
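    On the integrity side, one standard building block such a platform could use is sealing each image with a MAC before it crosses sites, so the receiving site detects tampering. The sketch below shows that idea with the standard library's HMAC-SHA256 only; the key handling is deliberately simplified and nothing here claims to be FedIDS's actual protocol, which would also encrypt for confidentiality.

```python
# Minimal integrity sketch: seal an image with HMAC-SHA256 before it leaves a
# federation site so the peer can verify it on arrival. Illustrative only;
# key distribution and confidentiality (encryption) are omitted.
import hashlib
import hmac

FEDERATION_KEY = b"shared-secret-distributed-out-of-band"   # hypothetical


def seal(image: bytes):
    """Return the payload plus a MAC the receiving site can verify."""
    tag = hmac.new(FEDERATION_KEY, image, hashlib.sha256).hexdigest()
    return image, tag


def verify(image: bytes, tag: str) -> bool:
    expected = hmac.new(FEDERATION_KEY, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


# Usage: one site ships a scene; the receiving site checks it on arrival.
scene = b"\x89PNG...satellite-scene-bytes"
payload, tag = seal(scene)
assert verify(payload, tag)              # intact
assert not verify(payload + b"x", tag)   # tampering detected
```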

    An efficient pattern-based approach for workflow supporting large-scale science: The DagOnStar experience

    Workflow engines are commonly used to orchestrate large-scale scientific computations in fields such as, but not limited to, weather, climate, natural disasters, food safety, and territorial management. However, implementing, managing, and executing real-world scientific applications in the form of workflows on multiple infrastructures (servers, clusters, clouds) remains a challenge. In this paper, we present DagOnStar (Directed Acyclic Graph OnAnything), a lightweight Python library implementing a workflow paradigm based on parallel patterns that can be executed on any combination of local machines, on-premise high-performance computing clusters, containers, and cloud-based virtual infrastructures. DagOnStar is designed to minimize data movement and reduce the application storage footprint. A case study based on a real-world application is explored to illustrate the use of this novel workflow engine: a containerized weather data collection application deployed on multiple infrastructures. An experimental comparison with other state-of-the-art workflow engines shows that DagOnStar can run workflows on multiple types of infrastructure, with an improvement of 50.19% in run time when using a parallel pattern with eight task-level workers.
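    The shape of that task-level parallel pattern can be illustrated with plain concurrent.futures: independent data-collection tasks dispatched to a pool of eight workers. This is a sketch of the pattern only, not DagOnStar's internals, and the station names are invented.

```python
# Sketch of a task-level parallel pattern with eight workers: independent
# weather-data collection tasks run concurrently. Plain stdlib illustration,
# not DagOnStar code.
from concurrent.futures import ThreadPoolExecutor


def collect(station: str) -> str:
    # Stand-in for one containerized weather-data collection task.
    return f"downloaded observations from {station}"


stations = [f"station-{i:02d}" for i in range(16)]
with ThreadPoolExecutor(max_workers=8) as pool:   # eight task-level workers
    for line in pool.map(collect, stations):
        print(line)
```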